DATA QUALITY
Although care was taken to ensure that the results of the 2014-15 NHS are as accurate as possible, there are certain factors which may affect the reliability of the results and for which no adequate adjustments can be made. One such factor is known as sampling variability. Other factors are collectively referred to as non-sampling error. These factors, which are discussed below, should be kept in mind when interpreting results of the survey.
Sampling Variability
Since the estimates are based on information obtained from a sample of the population, they are subject to sampling variability (or sampling error), that is, they may differ from the figures that would have been obtained from an enumeration of the entire population, using the same questionnaires and procedures. The magnitude of the sampling error associated with a sample estimate depends on the following factors.
- Sample design - there are many different methods which could have been used to obtain a sample from which to collect data on health status, health-related actions and health risk factors. The final design attempted to make survey results as representative as possible within cost and operational constraints. Details of sample design are contained in Survey Design and Operation, under Sample Design and Selection.
- Sample size - the larger the sample on which the estimate is based, the smaller the associated sampling error.
- Population variability - the extent to which people differ on the particular characteristic being measured. The smaller the population variability of a particular characteristic, the more likely it is that the population will be well represented by the sample, and, therefore, the smaller the sampling error. Conversely, the more variable the characteristic, the greater the sampling error.
Measure of Sampling Variability
One measure of the likely difference is given by the standard error (SE), which indicates the extent to which an estimate might have varied because only a sample of dwellings was included. There are about two chances in three that the sample estimate will differ by less than one SE from the figure that would have been obtained if all dwellings had been included, and about 19 chances in 20 that the difference will be less than two SEs. This is known as the margin of error (MoE) at the 95% confidence level. The MoE at the 95% confidence level is expressed as 1.96 times the SE. The 95% confidence interval is the estimate +/- the MoE, i.e. the range from the estimate minus 1.96 times the SE to the estimate plus 1.96 times the SE.
Another measure of the likely difference is the relative standard error (RSE), which is obtained by expressing the SE as a percentage of the estimate to which it relates. The RSE is a useful measure in that it provides an immediate indication of the percentage errors likely to have occurred due to sampling, and thus avoids the need to refer also to the size of the estimate. More detail on the calculation of SEs, MoEs and RSEs can be found in the Technical Note.
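The relationships above can be sketched with some simple arithmetic. The estimate and SE values below are hypothetical, for illustration only, and are not survey results:

```python
# Illustrative arithmetic only: the estimate and SE values are
# hypothetical, not taken from the survey.
estimate = 120_000   # hypothetical population estimate
se = 10_000          # hypothetical standard error

moe = 1.96 * se                        # margin of error at 95% confidence
ci = (estimate - moe, estimate + moe)  # 95% confidence interval
rse = se / estimate * 100              # relative standard error (%)

print(f"MoE: {moe:.0f}")                       # 19600
print(f"95% CI: {ci[0]:.0f} to {ci[1]:.0f}")   # 100400 to 139600
print(f"RSE: {rse:.1f}%")                      # 8.3%
```

Because the RSE is expressed relative to the estimate, it can be compared across estimates of very different sizes without reference to the estimates themselves.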
Proportion estimates annotated with a hash (#) have a margin of error greater than 10%. Users should give the margin of error particular consideration when using these estimates.
Estimates with relative standard errors less than 25% are considered sufficiently reliable for most purposes. However, estimates with relative standard errors of 25% or more are included in ABS publications of results from this survey. Estimates with RSEs greater than 25% but less than or equal to 50% are annotated by an asterisk (*) to indicate they are subject to high SEs relative to the size of the estimate and should be used with caution. Estimates with RSEs of greater than 50%, annotated by a double asterisk (**), are considered too unreliable for most purposes. These estimates can be aggregated with other estimates to reduce the overall sampling error.
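The annotation rules described above can be expressed as a small function. This is an illustrative sketch of the publication convention, not ABS software:

```python
def annotate(rse_percent):
    """Return the reliability annotation for an estimate, given its RSE (%).

    Mirrors the publication rules described above: no annotation up to
    25%, '*' for RSEs greater than 25% and up to 50%, '**' above 50%.
    """
    if rse_percent > 50:
        return "**"   # considered too unreliable for most purposes
    if rse_percent > 25:
        return "*"    # subject to high SEs; use with caution
    return ""         # sufficiently reliable for most purposes
```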
RSEs for estimates are published in 'direct' form: the RSE for each estimate is calculated individually and published using a replicate weights technique (the Jackknife method). Direct calculation of RSEs can result in larger estimates having larger RSEs than smaller ones, since these larger estimates may have more inherent variability. More information about the replicate weights technique can be found in the Technical Note.
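As an illustration only, one common form of the delete-a-group Jackknife can be sketched as follows. The number of replicate groups and the exact scaling factor used for the NHS are assumptions here; the survey's Technical Note gives the authoritative formula:

```python
import math

def jackknife_rse(full_estimate, replicate_estimates):
    """Sketch of a delete-a-group Jackknife RSE, one common form of the
    replicate weights technique. The scaling factor (g - 1) / g is an
    assumption; see the Technical Note for the form used in the NHS.
    """
    g = len(replicate_estimates)
    # Variance estimated from the spread of the replicate estimates
    # (each computed with one replicate group's weights) around the
    # full-sample estimate.
    variance = (g - 1) / g * sum(
        (rep - full_estimate) ** 2 for rep in replicate_estimates
    )
    se = math.sqrt(variance)
    return se / full_estimate * 100  # RSE as a percentage
```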
Standard Errors of Proportions, Differences and Sums
Proportions formed from the ratio of two estimates are also subject to sampling error. The size of the error depends on the accuracy of both the estimates.
The difference between, or sum of, two survey estimates (of numbers or percentages) is itself an estimate and is therefore also subject to sampling error. The SE of the difference between, or sum of, two survey estimates depends on their SEs and the relationship between them.
The formulas to approximate the RSE for proportions and the SE of the difference between, or sum of, two estimates can be found in the Technical Note.
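The commonly used approximations can be sketched as below. These are the standard approximations for a proportion whose numerator is a subset of its denominator, and for uncorrelated estimates; the Technical Note gives the exact forms used for this survey:

```python
import math

def rse_proportion(rse_numerator, rse_denominator):
    """Approximate RSE (%) of a proportion x/y, where x is a subset of y.

    Uses the common approximation RSE(x/y) ~= sqrt(RSE(x)^2 - RSE(y)^2);
    this is an assumption here, not necessarily the survey's exact form.
    """
    return math.sqrt(abs(rse_numerator ** 2 - rse_denominator ** 2))

def se_diff_or_sum(se_x, se_y):
    """Approximate SE of the difference between, or sum of, two estimates,
    assuming the estimates are uncorrelated (the usual conservative
    approximation): SE(x-y) ~= sqrt(SE(x)^2 + SE(y)^2).
    """
    return math.sqrt(se_x ** 2 + se_y ** 2)
```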
Testing for Statistically Significant Differences
For comparing estimates between surveys or between populations within a survey, it is useful to determine whether apparent differences are 'real' differences between the corresponding population characteristics, or simply the product of differences between the survey samples. One way to examine this is to determine whether the difference between the estimates is statistically significant. This is done by calculating the standard error of the difference between two estimates (x and y) and using it to calculate the test statistic:

(x - y) / SE(x - y)
If the value of the test statistic is greater than 1.96, then we may say that we are 95% certain that there is a statistically significant difference between the two populations with respect to that characteristic. Otherwise, it cannot be stated with confidence that there is a real difference between the populations.
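The test can be sketched as follows, using the approximate SE of a difference for uncorrelated estimates (an assumption; the Technical Note gives the survey's exact formulas). All input values in the test below are hypothetical:

```python
import math

def is_significant(x, se_x, y, se_y):
    """Test whether two estimates differ significantly at the 95% level.

    Assumes the estimates are uncorrelated, so the SE of the difference
    is approximated by sqrt(SE(x)^2 + SE(y)^2). Returns the test
    statistic and whether it exceeds the 1.96 critical value.
    """
    se_diff = math.sqrt(se_x ** 2 + se_y ** 2)
    stat = abs(x - y) / se_diff
    return stat, stat > 1.96
```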
Non-sampling Error
Lack of precision due to sampling variability should not be confused with inaccuracies that may occur for other reasons, such as errors in response and recording. Inaccuracies of this type are referred to as non-sampling error. This type of error is not specific to sample surveys and can occur in a census enumeration. The major sources of non-sampling error are:
- Errors related to scope and coverage
- Response errors due to incorrect interpretation or wording of questions
- Interviewer bias
- Bias due to non-response, because health status, health-related behaviour and other characteristics of non-responding persons may differ from responding persons
- Errors in processing such as mistakes in the recording or coding of the data obtained.
These sources of error are discussed below.
Errors Related to Scope and Coverage
Some dwellings may have been inadvertently included or excluded because, for example, the distinctions between whether they were private or non-private dwellings may have been unclear. All efforts were made to overcome such situations by constant updating of lists both before and during the survey. Also, some persons may have been inadvertently included or excluded because of difficulties in applying the scope rules concerning the identification of usual residents, and the treatment of some overseas visitors.
Response Errors
Response errors may have arisen from three main sources:
- Flaws in questionnaire design and methodology
- Flaws in interviewing technique
- Inaccurate reporting by the respondent.
Errors may be caused by misleading or ambiguous questions, inadequate or inconsistent definitions of terminology used, or poor overall survey design (for example, context effects, where responses to a question are directly influenced by the preceding questions). To overcome problems of this kind, individual questions, and the questionnaire overall, were thoroughly tested before being finalised for use in the survey. A field test (Dress Rehearsal) of the 2014-15 NHS was conducted in South Australia from November to December 2013, involving approximately 500 households. The purpose of the Dress Rehearsal was to assess how well the new and changed content in the 2014-15 NHS questionnaire worked in the field, to test some procedural elements, and to allow a timing analysis to be completed.
As a result of the testing, modifications were made to question design, wording, ordering and associated prompt cards, and some changes were made to survey procedures.
Although every effort was made to minimise response errors due to questionnaire design and content issues, some errors will inevitably have occurred in the final survey enumeration.
Reporting errors may also have resulted from interviewer and/or respondent fatigue (i.e. loss of concentration), particularly for those respondents reporting for both themselves and a child. Inaccurate reporting may also occur if respondents provide deliberately incorrect responses. While efforts were made to minimise errors arising from fatigue, or from deliberate misreporting or non-reporting by respondents, through emphasising the importance of the data and checks on consistency within the survey instrument, some instances will have inevitably occurred.
Reference periods used in relation to each topic were selected to suit the nature of the information being sought; in particular, to strike the right balance between minimising recall errors and ensuring the period was meaningful and representative (from both respondent and data use perspectives) and would yield sufficient observations in the survey to support reliable estimates. It is possible that the reference periods did not suit every person for every topic, and that difficulty with recall may have led to inaccurate reporting in some instances.
Lack of uniformity in interviewing standards may also result in non-sampling errors. Training programs and checking of interviewers' work were employed to achieve and maintain uniform interviewing practices and a high level of accuracy in recording answers on the survey questionnaire (see the Interviews section of the Data collection page). The operation of the Computer Assisted Interview (CAI) instrument itself, and the built-in checks within it, ensure that data recording standards are maintained. Respondent perception of the personal characteristics of the interviewer can also be a source of error, as the age, sex, appearance or manner of the interviewer may influence the answers obtained.
Non-Response Bias
Non-response may occur when people cannot, or will not, cooperate in the survey, or cannot be contacted by interviewers. Non-response can introduce bias to the results insofar as non-respondents may have different characteristics and health-related behaviour patterns from those of persons who did respond. The magnitude of the bias depends on the extent of these differences and the level of non-response.
The 2014-15 NHS achieved an overall response rate of 82.0% (fully/adequately responding households, after sample loss). Data to accurately quantify the nature and extent of the differences in health characteristics between respondents and non-respondents are not available. Under- or over-representation of particular demographic groups in the sample is compensated for at the State, section of State (i.e. capital city and balance of state), sex and age group levels in the weighting process. Other disparities are not adjusted for.
Households with incomplete interviews were treated as fully responding for estimation purposes where the only unanswered questions were legitimate 'don't know' or refusal options, any or all questions on income, or questions on height and weight (as these data were imputed). These non-response items were coded to 'not stated'. Since more than one person could be selected from each household in the sample, and to maintain a healthy sample size for output, any household with at least one fully responding selected person was kept on file. This is consistent with 2011-12 NHS procedures, and resulted in 130 households originally coded as part responding (e.g. part refusal or part non-contact) being kept on file. Note that this differs from Health Surveys conducted prior to 2011-12, which retained only households with complete surveys for all selected respondents.
Physical Measures Non-Response
The 2014-15 NHS collected measurements of height and weight (for BMI), and waist circumference of all persons 2 years and over, and blood pressure measurements of persons 18 years and over. These measurements were the only voluntary components of the survey and were therefore subject to a level of non-response. The 2014-15 NHS experienced a higher level of non-response across all measures compared to the 2011-12 NHS.
Limited qualitative data was available from the 2014-15 NHS for non-response analysis, via interviewer debriefing and comments entered by interviewers within the collection instrument. These identified the following as key drivers of the non-response:
- A general refusal from respondents who were unhappy about survey participation; while they did not refuse the whole survey, the refusal of measurements was accompanied by refusals at other sensitive topics - namely: income, employer's name, and contact details for follow up.
- A number of respondents wanted to complete the interview as quickly as possible and were therefore unwilling to spend extra time having measurements taken.
- Some gender-related issues (e.g. single female households).
- A number of interviews were conducted in a public location where it was deemed unsuitable for measures to be taken. In some instances, this was also associated with gender or culture.
- Some Occupational Health & Safety concerns meant the interviewer did not attempt to take measurements (e.g. unsanitary conditions, concerns associated with the age or frailty of the respondent, or the respondent's physical or mental health).
- Weight-related concerns (i.e. respondent sensitive about their weight).
- Equipment failure.
- Some respondents refused measurements but were happy to provide self-reported height, weight, etc. This was particularly true for respondents who had recently had these measurements taken by a health professional, or who regularly take their own measurements, and therefore found it unnecessary to have them taken again. Where collected, the self-reported data were used for output.
Imputation of missing physical measures data was introduced in 2014-15 in order to ensure that estimates for BMI, waist circumference, and blood pressure, were in relation to the full population, rather than the measured population only (as per 2011-12). For further details see Imputation.
Processing Errors
Processing errors may occur at any stage between the initial collection of the data and the final compilation of statistics. These may be due to a failure of computer editing programs to detect errors in the data, or may occur during the manipulation of raw data to produce the final survey data files. For example, in the course of deriving new data items from raw survey data, or during the estimation procedures or weighting of the data file.
To minimise the likelihood of these errors occurring, a number of quality assurance processes were employed.
- Comprehensive quality assurance procedures were applied to the NHS coding of conditions, medications and alcohol data. Within the instruments, trigram coders were used to aid the interviewer with the collection of this data. This was complemented by manual coding of text fields where interviewers could not find an appropriate response in the coder.
- Computer editing. Edits were devised to ensure that logical sequences were followed in the questionnaires, that necessary items were present, and that specific values lay within certain ranges. These edits were designed to detect reporting and recording errors, incorrect relationships between data items, and missing data items. Many of these edits were triggered during the interview in order to correct errors or confirm responses at the time of interview.
- Data file checks. At various stages during processing (such as after computer editing and subsequent amendments, weighting of the file, and derivation of new data items), frequency counts and/or tabulations were obtained from the data file showing the distribution of persons for different characteristics. These were used as checks on the contents of the data file, to identify unusual values which might have significantly affected estimates, and illogical relationships not previously identified by edits. Further checks were conducted to ensure consistency between related data items, and between relevant populations.
- Comparison of data. Where possible, checks of the data were undertaken to ensure consistency of the survey outputs against results of previous NHSs and data available from other sources.
More information on these is included in each of the relevant sections.
Other Factors Affecting Estimates
In addition to data quality issues, there are a number of both general and topic-specific factors which should be considered in interpreting the results of this survey. The general factors affect all estimates obtained, but may affect topics to a greater or lesser degree depending on the nature of the topic and the uses to which the estimates are put. This section outlines these general factors. Additional issues relating to the interpretation of individual topics are discussed in the topic descriptions provided in other sections of this Users' Guide.
Scope
The scope of the survey defines the boundaries of the population to which the estimates relate.
The most important aspect of the survey scope affecting the interpretation of estimates from this survey is that institutionalised persons (including inpatients of hospitals, nursing homes and other health institutions) and other persons resident in non-private dwellings (e.g. hotels, motels, boarding houses) were excluded from the survey.
Coverage
The 2014-15 NHS includes all geographic areas except those classified as very remote or migratory. All persons living in discrete Aboriginal and Torres Strait Islander communities in any geographic location were also excluded.
Personal Interview and Self-Assessment Nature of the Survey
The 2014-15 NHS was designed using personal or proxy (e.g. parent or guardian answering for a child, or a carer answering for a disabled person) interviews to obtain data on respondents’ own perceptions of their state of health, their use of health services and aspects of their lifestyle. The information obtained is therefore not necessarily based on any professional opinion (e.g. from a doctor, nurse, dentist, etc.) or on information available from records kept by respondents.
Concepts and Definitions
The scope of each topic and the concepts and definitions associated with individual pieces of information should be considered when interpreting survey results. This information is available for individual topics of this Users’ Guide.
Wording of Questions
To enable accurate interpretation of survey results it is essential to bear in mind the precise wording of questions used to collect individual items of data, particularly in those cases where the question involved ‘running prompts’ (where the interviewer reads from a list until the respondent makes a choice), or where a prompt card was used.
For example, testing has shown that reporting of medical conditions is improved where direct questions are asked about a specific condition or where conditions are specifically identified in a prompt card, and that data is less robust where it is up to the respondent to identify conditions in response to a general question. It is not possible or practical to mention all conditions in questions or prompts, therefore the approach taken in the 2014-15 NHS was to identify main conditions which include (where applicable):
- Asthma
- Arthritis
- Cancer
- Heart and circulatory conditions
- Diabetes mellitus
- Osteoporosis
- Kidney disease
- Mental, behavioural and cognitive conditions.
In the 2014-15 NHS an additional list of conditions was also provided as a prompt for other long-term conditions. As some conditions are specifically identified in the questionnaire and others are not, response levels and accuracy of condition reporting may be affected. Where the level and nature of condition identification has changed between surveys, comparability over time may be affected. Further information on the collection methodology for conditions can be found in the Health Conditions chapter of this Users’ Guide.
For further information on question wording please refer to the survey questionnaires available from the Downloads page of this product.
Reference Periods
All results should be considered within the context of the time references that apply to the various topics. Different reference periods were used for specific topics (e.g. 'in the last week' for alcohol consumption and exercise, 'ever' and 'in the last 12 months' for actions taken).
Although it can be expected that a larger section of the population would have reported taking a certain action if a longer reference period had been used, the increase is not proportionate to the increase in time. This should be taken into consideration when comparing results from this survey to data from other sources where the data relates to different reference periods.
Classifications and Categories
The classifications and categories used in the survey provide an indication of the level of detail available in survey output. However, the ability of respondents to provide the data may limit the amount of detail that can be output. Where respondents may have used non-medical terminology, symptoms rather than conditions, or generic rather than specific terminology, conditions may only be able to be output in general terms (e.g. 'heart condition nfd' rather than 'Angina' or 'Atrial fibrillation'). Classifications used in the survey can be found in Appendix 3: ABS Standard Classifications. Survey specific classifications can be found in Appendix 2: Classification of Health Conditions.
Collection Period
The 2014-15 NHS was enumerated from 6 July 2014 to 4 July 2015. When considering survey results over time or comparing them with data from another source, care must be taken to ensure that any differences between the collection periods take into consideration the possible effect of those differences on the data, for example, seasonal differences and effects of holidays.
To take account of possible seasonal effects on health characteristics, the sample was spread randomly across the 12-month enumeration period. Analysis of previous health surveys has shown no particular seasonal bias across key estimates.